This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Shared-task participants were expected to develop trainable systems capable of identifying mentions and clustering them according to identity coreference. The public edition of CorefUD 1.0, comprising 13 datasets for 10 languages, served as the source of training and evaluation data. The CoNLL score used in previous coreference-oriented shared tasks served as the main evaluation metric. Five participating teams submitted eight coreference prediction systems; in addition, the organizers provided a competitive Transformer-based baseline system at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of the CoNLL score averaged over all datasets for all languages).
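The CoNLL score referenced above is conventionally computed as the unweighted mean of the F1 scores of three coreference metrics (MUC, B³, and entity-based CEAF), as in the CoNLL-2011/2012 shared tasks. A minimal sketch of that aggregation (function names are illustrative, not from the shared task's scorer):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def conll_score(muc_f1: float, b3_f1: float, ceafe_f1: float) -> float:
    """CoNLL score: unweighted average of the MUC, B-cubed, and CEAF-e F1s."""
    return (muc_f1 + b3_f1 + ceafe_f1) / 3
```

In the shared task this per-dataset score is then averaged over all datasets to rank systems.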
The SIGMORPHON 2022 shared task on morpheme segmentation challenged systems to decompose a word into a sequence of morphemes and covered most types of morphology: compounds, derivations, and inflections. Subtask 1, word-level morpheme segmentation, covered 5 million words in 9 languages (Czech, English, Spanish, Hungarian, French, Italian, Russian, Latin, Mongolian) and received 13 system submissions from 7 teams; the best system averaged 97.29% F1 across all languages, ranging from English (93.84%) to Latin (99.38%). Subtask 2, sentence-level morpheme segmentation, covered 18,735 sentences in 3 languages (Czech, English, Mongolian) and received 10 system submissions from 3 teams; the best systems outperformed all three state-of-the-art subword tokenization methods (BPE, ULM, Morfessor2) by 30.71% absolute. To facilitate error analysis and support future research of any kind, we release all system predictions, the evaluation script, and all gold-standard datasets.
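BPE, one of the subword-tokenization baselines named above, illustrates why such methods differ from morpheme segmentation: it greedily merges frequent symbol pairs, with no guarantee that the resulting subwords align with morphemes. A self-contained sketch of the standard BPE procedure (not any shared-task system; the toy corpus below is invented for illustration):

```python
from collections import Counter

def learn_bpe(word_freqs: dict, num_merges: int) -> list:
    """Learn BPE merge rules from a word -> frequency dict."""
    # Each word starts as a tuple of single characters.
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge everywhere in the vocabulary.
        vocab = {tuple(_merge(symbols, best)): f for symbols, f in vocab.items()}
    return merges

def _merge(symbols, pair):
    """Replace every adjacent occurrence of `pair` with its concatenation."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def segment(word: str, merges: list) -> list:
    """Segment a new word by replaying the learned merges in order."""
    symbols = list(word)
    for pair in merges:
        symbols = _merge(symbols, pair)
    return symbols
```

For example, after training on a corpus where "low" is frequent, `segment("lowest", merges)` yields frequency-driven pieces such as `lowe|s|t` rather than the morphemes `low|est`, which is the gap the subtask 2 systems close.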
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.